SYNCHRONIZE(TM)
Technical Overview

Architecture

Synchronize is designed to meet the demanding performance requirements of the corporate environment. The design directly addresses the key issues of scalability, speed, and ease of administration. Unlike personal schedulers (or personal schedulers retrofitted to provide "group scheduling"), Synchronize maintains scheduling information for a set of users in a common database. This provides the fastest access to group scheduling information, minimizes desktop storage and backup issues, and is the most straightforward model to administer. Scalability is addressed by the client/server design with distributed database support.

Client/Server

The implementation of a client/server model was a key performance decision. The Synchronize server (a process) acts as the conduit through which all Synchronize clients access the database. All communication is initiated by a client request, which the server answers with an acknowledgment. By separating the client and server processes, Synchronize achieves performance gains by caching recently requested information on the client side and recently retrieved information on the server side. The most significant advantage of the client/server design, with respect to scalability, is its support for distributed databases.
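
The request/acknowledgment exchange can be pictured with a short sketch. The code below is illustrative only (Python, with a made-up message format and port number); it is not the Synchronize wire protocol, but it shows the shape of the interaction: every operation the client performs is a request to the server, and every request is answered with an acknowledgment.

    import json
    import socket

    def send_request(sock, request):
        """Send one request and block until the server's acknowledgment arrives."""
        sock.sendall((json.dumps(request) + "\n").encode())
        ack_line = sock.makefile().readline()   # every request is answered by the server
        return json.loads(ack_line)

    # Hypothetical usage: ask the local server for one user's appointments.
    # client = socket.create_connection(("calendar-server", 5730))
    # reply  = send_request(client, {"op": "fetch", "user": "chris", "day": "1996-01-15"})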

Distributed Databases

As the number of users in an enterprise grows, the size of the common database grows correspondingly. Because the Synchronize server handles all client requests, the actual location of the Synchronize database is transparent to the user. This allows the administrator to distribute the common database across many networked databases using whatever distribution scheme is appropriate for the enterprise. The resulting distributed databases are still functionally a "common database," but they are physically separate. These databases can exist on a few different machines at a particular site or on thousands of machines worldwide.
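
Location transparency can be pictured as a lookup the server performs on the client's behalf: the client always talks to its local server, and the server resolves which physical database actually holds the requested calendar. The sketch below is only an illustration of the idea (Python, with an invented routing table and host names), not Synchronize's actual distribution mechanism.

    # Hypothetical routing table mapping organizational units to database hosts.
    DATABASE_HOSTS = {
        "engineering": "db-eng.example.com",
        "sales":       "db-sales.example.com",
        "europe":      "db-paris.example.com",
    }

    def database_for(user_record):
        """Resolve the physical database holding this user's calendar.

        The client never performs this step; it simply asks its local
        server, which forwards the request to the right database.
        """
        return DATABASE_HOSTS[user_record["unit"]]

    # database_for({"name": "chris", "unit": "sales"})  ->  "db-sales.example.com"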

When a user schedules a calendar item across various databases, the data is sent to the user's local server, where the item can be scheduled without requiring the client to wait for acceptance by remote servers. The Synchronize forwarding agent then routes the item (along with other queued items) to the appropriate remote database. If the remote database is not available, the forwarding agent automatically tries again at a later time.
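
The forwarding agent behaves like a store-and-forward queue: the local server accepts the item immediately, and delivery to remote databases happens in the background, with automatic retries when a remote database cannot be reached. The sketch below shows that retry loop in outline (Python); the deliver() stub and the retry interval are assumptions, not details of the actual implementation.

    import time
    from collections import deque

    RETRY_INTERVAL = 300          # seconds between delivery passes (assumed value)
    outbound = deque()            # items queued for remote databases

    def deliver(item, remote_host):
        """Attempt delivery to one remote database; return True on success.
        Placeholder for the real network call."""
        return False

    def forwarding_agent():
        """Drain the queue, re-queueing anything whose remote database is unavailable."""
        while True:
            for _ in range(len(outbound)):
                item, remote_host = outbound.popleft()
                try:
                    delivered = deliver(item, remote_host)
                except OSError:                            # remote database unreachable
                    delivered = False
                if not delivered:
                    outbound.append((item, remote_host))   # try again on a later pass
            time.sleep(RETRY_INTERVAL)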

Optimizations

Server Caching

For each server to support hundreds of clients, it must minimize slow operations such as disk I/O. To that end, the server implements both a read cache and a write cache. Because the server is the only process writing to the database, its read cache never goes stale: once a file has been read in by the server, it never has to be re-read in normal operation. In addition to the read cache, a write caching mechanism buffers all client write requests and flushes them to disk at an interval defined by the Synchronize administrator. In this fashion, clients usually do not have to wait for disk I/O in order to get responses to their requests.
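
As a rough picture of how the two caches fit together, the sketch below assumes what the text states: the server is the sole writer, so a file read once can be served from memory from then on, and client writes are buffered and flushed on an administrator-defined interval. The class and its methods are illustrative names, not the Synchronize server's actual internals.

    import time

    class CachedStore:
        """Illustrative read cache plus deferred-write buffer for a single-writer server."""

        def __init__(self, flush_interval=60):       # flush interval set by the administrator
            self.read_cache = {}                     # filename -> contents, never goes stale
            self.pending_writes = {}                 # filename -> contents awaiting flush
            self.flush_interval = flush_interval
            self.last_flush = time.time()

        def read(self, filename):
            if filename not in self.read_cache:      # only the first access touches the disk
                with open(filename, "rb") as f:
                    self.read_cache[filename] = f.read()
            return self.read_cache[filename]

        def write(self, filename, data):
            self.read_cache[filename] = data          # keep the read cache consistent
            self.pending_writes[filename] = data      # the client is answered immediately
            if time.time() - self.last_flush >= self.flush_interval:
                self.flush()

        def flush(self):
            for filename, data in self.pending_writes.items():
                with open(filename, "wb") as f:       # the only slow step, done in batches
                    f.write(data)
            self.pending_writes.clear()
            self.last_flush = time.time()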

Server Multiplexing

Because the number of open file descriptors per process is limited (from 60 to 255, depending upon the platform), the server closes the connected sockets of older clients as the limit is approached in order to accept new ones. A dropped client's connection is restored transparently upon its next request to the server, with only a one- to two-second reconnect delay experienced by the user. A useful side effect of this strategy is that servers can be killed and restarted without the clients being aware of these operations.
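
The multiplexing strategy can be sketched as follows: the server watches all client sockets with select(), and when the number of open descriptors approaches the platform's limit it closes the longest-idle connections to make room; a dropped client simply reconnects on its next request. The descriptor budget, port number, and bookkeeping below are simplified assumptions for illustration.

    import select
    import socket
    import time

    FD_SOFT_LIMIT = 60                          # conservative per-process descriptor budget

    listener = socket.socket()
    listener.bind(("", 5730))                   # hypothetical server port
    listener.listen(16)

    clients = {}                                # client socket -> time of its last request

    def reap_idle_clients():
        """Close the longest-idle client sockets to stay under the descriptor limit."""
        for sock in sorted(clients, key=clients.get)[:len(clients) // 2]:
            sock.close()                        # the client reconnects on its next request
            del clients[sock]

    while True:
        readable, _, _ = select.select([listener] + list(clients), [], [])
        for sock in readable:
            if sock is listener:
                if len(clients) + 1 >= FD_SOFT_LIMIT:
                    reap_idle_clients()
                conn, _ = listener.accept()
                clients[conn] = time.time()
            elif sock in clients:               # may have been reaped earlier this pass
                data = sock.recv(4096)
                if not data:                    # client closed its end
                    sock.close()
                    del clients[sock]
                else:
                    clients[sock] = time.time()
                    sock.sendall(b"ACK\n")      # acknowledge the request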

Client-Side Caching

Client-side caching provides quick database access and eliminates unnecessary network traffic and redundant client requests. When a client makes a request to the database server, a tag reflecting the state of the data in the client's cache is sent with the request. The server then checks whether that data is still valid. If it is, the server simply informs the client of that fact. If not, the server sends new data for the request, and the client flushes the invalid portion of its cache and replaces it with the new data. Because all changes made by the client are applied immediately to the local cache, the cache becomes invalid only when changes are made by other clients.
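
The tag exchange works much like a validity check: the client remembers the tag it last received for a piece of data and sends it along with the next request; the server either confirms that the cached copy is still good or returns fresh data with a new tag. The sketch below (Python) invents a tag scheme and message format purely to show the flow; it is not Synchronize's actual protocol.

    class ToyServer:
        """Stand-in for the server's validity check on a client-supplied tag."""

        def __init__(self):
            self.data, self.tags = {}, {}

        def request(self, key, client_tag):
            if client_tag is not None and client_tag == self.tags.get(key):
                return {"status": "valid"}                 # client's cached copy is current
            return {"status": "stale",
                    "tag": self.tags.get(key, 0),
                    "data": self.data.get(key)}

        def write(self, key, data):
            self.data[key] = data
            self.tags[key] = self.tags.get(key, 0) + 1     # a new tag marks the change
            return {"tag": self.tags[key]}

    class ClientCache:
        """Tag-validated client-side cache."""

        def __init__(self, server):
            self.server = server
            self.entries = {}                              # key -> (tag, data)

        def fetch(self, key):
            tag, data = self.entries.get(key, (None, None))
            reply = self.server.request(key, tag)          # the tag rides along with the request
            if reply["status"] == "valid":
                return data                                # no data needs to cross the network
            self.entries[key] = (reply["tag"], reply["data"])   # flush and replace the stale entry
            return reply["data"]

        def update(self, key, data):
            reply = self.server.write(key, data)
            self.entries[key] = (reply["tag"], data)       # local changes update the cache at once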






CrossWind Technologies, Inc.
1505 Ocean Street, Suite 1
Santa Cruz, CA 95060

Phone: 408-469-1780
Fax: 408-469-1750
Comments? Send E-mail to: webmaster@crosswind.com
Last revised: January, 1996